
# Low-precision and efficient inference

- **Mistral Small 3.2 24B Instruct 2506 Bf16** (mlx-community, Apache-2.0)
  An MLX-format model converted from Mistral-Small-3.2-24B-Instruct-2506, suitable for instruction-following tasks. Large language model; supports multiple languages. Downloads: 163 · Likes: 1
- **DeepSeek R1 GGUF** (unsloth, MIT)
  A 1.58-bit dynamically quantized build of DeepSeek-R1 by Unsloth. The model uses a mixture-of-experts (MoE) architecture and supports English-language tasks. Large language model; English. Downloads: 2.0M · Likes: 1,045
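Both entries illustrate why low-precision formats matter for inference: weight storage scales linearly with bits per parameter. As a minimal sketch (the function name and the 24B parameter count used for both comparisons are illustrative; DeepSeek-R1 itself is far larger than 24B parameters):

```python
def model_memory_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate weight-storage size in GiB for a model whose
    n_params parameters are each stored at bits_per_param bits."""
    return n_params * bits_per_param / 8 / (1024 ** 3)

# A 24B-parameter model in bf16 (16 bits/param), as in the Mistral entry:
bf16_gib = model_memory_gib(24e9, 16)    # roughly 45 GiB of weights

# The same parameter count at 1.58 bits/param, the precision named in
# the Unsloth DeepSeek-R1 entry:
q158_gib = model_memory_gib(24e9, 1.58)  # roughly 4.4 GiB of weights
```

This ignores activation memory, the KV cache, and per-tensor quantization metadata, so real memory use is somewhat higher, but it shows the roughly 10x reduction that motivates 1.58-bit quantization.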